14 research outputs found

    Design and User Satisfaction of Interactive Maps for Visually Impaired People

    Multimodal interactive maps are a solution for presenting spatial information to visually impaired people. In this paper, we present an interactive multimodal map prototype based on a tactile paper map, a multi-touch screen and audio output. We first describe the steps in designing an interactive map: drawing and printing the tactile paper map, choosing the multi-touch technology, the interaction techniques and the software architecture. We then describe the method used to assess user satisfaction. We provide data showing that an interactive map, although based on a single, elementary double-tap interaction, was met with a high level of user satisfaction. Interestingly, satisfaction is independent of a user's age, previous visual experience or Braille experience. This prototype will be used as a platform to design advanced interactions for spatial learning.

    Integrality and Separability of Multitouch Interaction Techniques in 3D Manipulation Tasks


    Depth-of-Field Blur Effects for First-Person Navigation in Virtual Environments


    Effect of Control-Display Gain and Mapping and Use of Armrests on Accuracy in Temporally Limited Touchless Gestural Steering Tasks

    Touchless gestural controls are becoming an important natural input technique for interaction with emerging virtual environments, but design parameters that improve task performance while also reducing user fatigue require investigation. This experiment aims to understand how control-display (CD) parameters such as gain and mapping, as well as the use of armrests, affect gesture accuracy in specific movement directions. Twelve participants completed temporally constrained two-dimensional steering tasks using free-hand fingertip gestures under several conditions. Use of an armrest, increased CD gain and horizontal mapping each significantly reduced success rate. The results show that optimal transfer functions for gestures will depend on the movement direction as well as on arm-support features.

    Moment-to-Moment Detection of Internal Thought from Eye Vergence Behaviour

    Internal thought refers to the process of directing attention away from a primary visual task to internal cognitive processing. Internal thought is a pervasive mental activity and closely related to primary task performance. As such, automatic detection of internal thought has significant potential for user modelling in intelligent interfaces, particularly for e-learning applications. Despite the close link between the eyes and the human mind, only a few studies have investigated vergence behaviour during internal thought and none has studied moment-to-moment detection of internal thought from gaze. While prior studies relied on long-term data analysis and required a large number of gaze characteristics, we describe a novel method that is computationally lightweight and that only requires eye vergence information that is readily available from binocular eye trackers. We further propose a novel paradigm to obtain ground-truth internal thought annotations that exploits human blur perception. We evaluate our method for three increasingly challenging detection tasks: (1) during a controlled math-solving task, (2) during natural viewing of lecture videos, and (3) during daily activities, such as coding, browsing, and reading. Results from these evaluations demonstrate the performance and robustness of vergence-based detection of internal thought and, as such, open up new directions for research on interfaces that adapt to shifts of mental attention.
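The vergence idea above can be illustrated with a minimal sketch. This is not the authors' actual method: the flat-screen geometry, interocular distance, drift threshold and function names are all illustrative assumptions. It shows only the two ingredients the abstract names: computing a vergence angle from binocular gaze data, and flagging moments when vergence drifts away from the task baseline.

```python
import math

def vergence_angle_deg(left_gaze_x_mm, right_gaze_x_mm,
                       screen_distance_mm=600.0, interocular_mm=63.0):
    """Approximate vergence angle (degrees) from each eye's on-screen
    horizontal gaze position, assuming a flat screen at a known distance."""
    theta_left = math.atan2(left_gaze_x_mm + interocular_mm / 2, screen_distance_mm)
    theta_right = math.atan2(right_gaze_x_mm - interocular_mm / 2, screen_distance_mm)
    # Converging lines of sight make theta_left exceed theta_right,
    # so the difference is positive and grows as fixation moves nearer.
    return math.degrees(theta_left - theta_right)

def flag_internal_thought(vergence_deg, baseline_deg, drift_deg=0.5, min_samples=30):
    """Label each sample True once vergence has drifted away from the
    task-fixation baseline for at least `min_samples` consecutive samples."""
    flags, run = [], 0
    for v in vergence_deg:
        run = run + 1 if abs(v - baseline_deg) > drift_deg else 0
        flags.append(run >= min_samples)
    return flags
```

A real detector would of course be calibrated per user and evaluated against annotated ground truth, as the paper's blur-perception paradigm does.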

    Evaluating Eyegaze Targeting to Improve Mouse Pointing for Radiology Tasks

    In current radiologists’ workstations, a scroll mouse is typically used as the primary input device for navigating image slices and conducting operations on an image. Radiological analysis and diagnosis rely on careful observation and annotation of medical images. During analysis of 3D MRI and CT volumes, thousands of mouse clicks are performed every day, which can cause wrist fatigue. This paper presents a dynamic control-to-display (C-D) gain mouse movement method, controlled by an eyegaze tracker as the target predictor. By adjusting the C-D gain according to the distance to the target, the mouse click targeting time is reduced. Our theoretical and experimental studies show that the mouse movement time to a known target can be reduced by up to 15%. We also present an experiment with 12 participants to evaluate the role of eyegaze targeting in the realistic situation of unknown target positions. These results indicate that, by using eyegaze to predict the target position, the dynamic C-D gain method can improve pointing performance by 8% and reduce the error rate compared with traditional mouse movement.
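The core idea, scaling C-D gain by the distance to the gaze-predicted target, can be sketched as follows. The gain schedule, thresholds and names are illustrative assumptions, not the paper's actual transfer function: gain is high far from the predicted target for fast ballistic travel and drops to the base gain near it for precise homing.

```python
def dynamic_cd_gain(distance_px, base_gain=1.0, max_gain=3.0, slow_radius_px=50.0):
    """Return a C-D gain that grows with distance to the predicted target."""
    if distance_px <= slow_radius_px:
        return base_gain
    # Ramp linearly with distance beyond the slow radius, capped at max_gain.
    return min(max_gain, base_gain + (distance_px - slow_radius_px) / 200.0)

def apply_gain(cursor_xy, mouse_delta_xy, predicted_target_xy):
    """Move the cursor by the raw mouse delta scaled by the distance-dependent gain."""
    cx, cy = cursor_xy
    tx, ty = predicted_target_xy
    dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
    g = dynamic_cd_gain(dist)
    dx, dy = mouse_delta_xy
    return (cx + dx * g, cy + dy * g)
```

Because the gain collapses to the base value inside the slow radius, a wrong gaze prediction degrades gracefully: the cursor simply moves at normal speed.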

    On the limits of the human motor control precision: the search for a device’s human resolution

    Input devices are often evaluated in terms of their throughput, as measured by Fitts' Law, and by their resolution. However, little effort has been made to understand the limit of resolution that is controllable or “usable” by the human using the device. What is the point of a 5000 dpi computer mouse if the human motor control system is far from being able to achieve this level of precision? This paper introduces the concept of a Device's Human Resolution (DHR): the smallest target size that users can acquire with an ordinary amount of effort using one particular device. We report on our attempt to find the DHR through a target acquisition experiment involving very small target sizes. Three devices were tested: a gaming mouse (5700 dpi), a PHANTOM (450 dpi), and a freespace device (85 dpi). The results indicate a decrease in target acquisition performance that is not predicted by Fitts' Law when target sizes become smaller than certain levels. In addition, the experiment shows that the actual achievable resolution varies greatly depending on the input device used, hence the need to include the “device” in the definition of DHR.
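The Fitts' Law baseline that the DHR deviation is measured against can be written down directly. The sketch below uses the common Shannon formulation with illustrative coefficients (the paper's fitted coefficients are not given here), plus a helper that expresses a physical target width in device counts so that devices of different dpi can be compared.

```python
import math

def fitts_mt(a_s, b_s, distance, width):
    """Shannon-form Fitts' Law: MT = a + b * log2(D/W + 1), in seconds.
    The DHR finding is that observed times exceed this prediction once
    the target width drops below a device-specific level."""
    return a_s + b_s * math.log2(distance / width + 1)

def width_in_counts(width_mm, device_dpi):
    """Express a target width in device resolution units (counts)."""
    return width_mm / 25.4 * device_dpi
```

For example, a 1 mm target spans roughly 224 counts on a 5700 dpi gaming mouse but only about 3 counts on an 85 dpi freespace device, which is why the achievable resolution depends so strongly on the device.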